
    Good for learning, bad for motivation? A meta-analysis on the effects of computer-supported collaboration scripts

    Scripting computer-supported collaborative learning has been shown to greatly enhance learning, but is often criticized for hindering learners’ agency and thus undermining their motivation. Beyond that, what makes some CSCL scripts particularly effective for learning is still a conundrum. This meta-analysis synthesizes the results of 53 primary studies that experimentally compared the effect of learning with a CSCL script to unguided collaborative learning on at least one of the variables motivation, domain learning, and collaboration skills. Overall, 5616 learners enrolled in K-12, higher education, or professional development participated in the included studies. The results of a random-effects meta-analysis show that learning with CSCL scripts leads to a non-significant positive effect on motivation (Hedges’ g = 0.13), a small positive effect on domain learning (Hedges’ g = 0.24), and a medium positive effect on collaboration skills (Hedges’ g = 0.72). Additionally, the meta-analysis shows how scaffolding single collaborative activities versus scaffolding a combination of collaborative activities affects the effectiveness of CSCL scripts, and that synergistic or differentiated scaffolding is hard to achieve. This meta-analysis offers the first counterevidence against the widespread criticism that CSCL scripts have negative motivational effects. Furthermore, the findings can be taken as evidence for the robustness of the positive effects on domain learning and collaboration skills.
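    The abstract above reports effect sizes as Hedges’ g, the standardized mean difference with a small-sample correction. As a minimal sketch of how such a value is computed from group means, standard deviations, and sample sizes (the numbers below are illustrative, not taken from the meta-analysis):

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between a treatment and a control group,
    with Hedges' small-sample correction factor J applied to Cohen's d."""
    # Pooled standard deviation across both groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp            # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)     # small-sample correction factor J
    return j * d

# Illustrative example: scripted group slightly outperforms unguided group
g = hedges_g(mean_t=0.6, mean_c=0.5, sd_t=0.2, sd_c=0.2, n_t=30, n_c=30)
```

A random-effects meta-analysis then pools such per-study g values, weighting each by the inverse of its (within-study plus between-study) variance.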

    Learning to diagnose collaboratively – Effects of adaptive collaboration scripts in agent-based medical simulations

    We investigated how medical students' collaborative diagnostic reasoning, particularly evidence elicitation and sharing, can be facilitated effectively using agent-based simulations. Providing adaptive collaboration scripts has been suggested to increase effectiveness, but existing evidence is mixed and could be affected by unsystematic group constellations. Collaboration scripts have also been criticized for undermining learners' agency. We therefore investigated the effects of adaptive and static scripts on collaborative diagnostic reasoning and on basic psychological needs. We randomly allocated 160 medical students to one of three groups: adaptive collaboration script, static collaboration script, or no collaboration script. We found that learning with adaptive collaboration scripts enhanced both evidence sharing performance and transfer performance. Scripting did not affect learners’ perceived autonomy and social relatedness. Yet, compared to static scripts, adaptive scripts had positive effects on perceived competence. We conclude that, for complex skills, complementing agent-based simulations with adaptive scripts seems beneficial to help learners internalize collaboration scripts without negatively affecting basic psychological needs.

    Facilitating collaborative diagnostic reasoning


    Simulation research and design: a dual-level framework for multi-project research programs

    Collaborations between researchers and practitioners have recently become increasingly popular in education, and educational design research (EDR) may benefit greatly from investigating such partnerships. One important domain in which EDR on collaborations between researchers and practitioners can be applied is research on simulation-based learning. However, frameworks describing both the research and design processes in research programs on simulation-based learning are currently lacking. The framework proposed in this paper addresses this gap. It is derived from theory and delineates the levels, phases, activities, roles, and products of research programs that develop simulations as complex scientific artifacts for research purposes. This dual-level framework applies to research programs with a research committee and multiple subordinate research projects. The framework is illustrated with examples from the actual research and design process of an interdisciplinary research program investigating the facilitation of diagnostic competences through instructional support in simulations. On a theoretical level, the framework contributes primarily to the EDR literature by offering a unique dual-level perspective. On a practical level, it may help by providing recommendations to guide the research and design process in research programs.

    Who is on the right track? Behavior-based prediction of diagnostic success in a collaborative diagnostic reasoning simulation

    Abstract
    Background: Making accurate diagnoses in teams requires complex collaborative diagnostic reasoning skills, which require extensive training. In this study, we investigated broad, content-independent behavioral indicators of diagnostic accuracy and examined whether and how quickly diagnostic accuracy could be predicted from these indicators as they were displayed in a collaborative diagnostic reasoning simulation.
    Methods: A total of 73 medical students and 25 physicians were asked to diagnose patient cases in a medical training simulation with the help of an agent-based radiologist. Log files were automatically coded for collaborative diagnostic activities (CDAs; i.e., evidence generation, sharing and eliciting of evidence and hypotheses, and drawing conclusions). These codes were transformed into bigrams that contained information about the time spent on and transitions between CDAs. Support vector machines with linear kernels, random forests, and gradient boosting machines were trained to classify, on the basis of the CDAs, whether a diagnostician could provide the correct diagnosis.
    Results: All algorithms performed well in predicting diagnostic accuracy in the training and testing phases. The random forest was selected as the final model because of its better performance (kappa = .40) in the testing phase. The model identified diagnostic success more reliably than diagnostic failure (sensitivity = .90; specificity = .46). A reliable prediction of diagnostic success was possible after about two thirds of the median time spent on the diagnostic task. Most important for the prediction of diagnostic accuracy was the time spent on certain individual activities, such as evidence generation (typical for accurate diagnoses), and on collaborative activities, such as sharing and eliciting evidence (typical for inaccurate diagnoses).
    Conclusions: This study advances the understanding of differences in the collaborative diagnostic reasoning processes of successful and unsuccessful diagnosticians. Taking time to generate evidence at the beginning of the diagnostic task can help build an initial adequate representation of the diagnostic case that prestructures subsequent collaborative activities and is crucial for making accurate diagnoses. This information could be used to provide adaptive, process-based feedback on whether learners are on the right diagnostic track. Moreover, early instructional support in a diagnostic training task might help diagnosticians improve such individual diagnostic activities and prepare for effective collaboration. In addition, the ability to identify successful diagnosticians even before task completion might help adjust task difficulty to learners in real time.
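    The abstract describes transforming coded log events into features that capture time spent on each collaborative diagnostic activity and the transitions (bigrams) between activities, which are then fed to classifiers. A minimal sketch of that feature-extraction step, using hypothetical activity codes and durations (the actual coding scheme and log format are not specified in the abstract):

```python
from collections import Counter

def bigram_features(log):
    """Turn a sequence of (activity, seconds) log codes into two feature sets:
    total time spent per activity, and counts of activity-to-activity transitions."""
    time_per_activity = Counter()
    transitions = Counter()
    for i, (activity, seconds) in enumerate(log):
        time_per_activity[activity] += seconds
        if i > 0:                         # a bigram needs a preceding activity
            transitions[(log[i - 1][0], activity)] += 1
    return {"time": dict(time_per_activity), "bigrams": dict(transitions)}

# Hypothetical coded log of one diagnostician's session
log = [("evidence_generation", 40), ("sharing_evidence", 25),
       ("eliciting_evidence", 15), ("drawing_conclusions", 10)]
features = bigram_features(log)
```

In the study, such feature vectors (one per diagnostician) were used to train SVM, random forest, and gradient boosting classifiers predicting diagnostic accuracy; this sketch covers only the feature construction, not the model training.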